Proximal Gradient Methods with Adaptive Subspace Sampling


Abstract

Many applications in machine learning or signal processing involve nonsmooth optimization problems. This nonsmoothness brings a low-dimensional structure to the optimal solutions. In this paper, we propose a randomized proximal gradient method harnessing this underlying structure. We introduce two key components: (i) a random subspace proximal gradient algorithm; and (ii) an identification-based sampling of the subspaces. Their interplay brings a significant performance improvement on typical learning problems in terms of dimensions explored.
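The abstract does not spell out the algorithm, but its two ingredients can be illustrated on the lasso problem, whose ℓ1 term induces exactly the low-dimensional (sparse) solution structure mentioned above. The sketch below is a minimal illustration under assumed details: the coordinate-subspace sampling, the function names, and all parameters are our own stand-ins, not the paper's adaptive scheme.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1: shrinks each entry toward zero.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def subspace_prox_grad(A, b, lam, step, n_iters=500, k=5, seed=0):
    """Illustrative random-subspace proximal gradient for the lasso
    min_x 0.5*||Ax - b||^2 + lam*||x||_1.  Each iteration updates only
    a random subset of k coordinates (a stand-in for the paper's
    subspace sampling, whose details the abstract does not give)."""
    rng = np.random.default_rng(seed)
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(n_iters):
        idx = rng.choice(n, size=k, replace=False)  # sampled subspace
        grad = A.T @ (A @ x - b)                    # gradient of the smooth part
        x[idx] = soft_threshold(x[idx] - step * grad[idx], step * lam)
    return x
```

After enough iterations the iterates identify the sparse support, which is the structure an identification-based sampling rule could then exploit by concentrating updates on the active coordinates.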


Similar resources

Distributed Delayed Proximal Gradient Methods

We analyze distributed optimization algorithms where parts of data and variables are distributed over several machines and synchronization occurs asynchronously. We prove convergence for the general case of a nonconvex objective plus a convex and possibly nonsmooth penalty. We demonstrate two challenging applications, ℓ1-regularized logistic regression and reconstruction ICA, and present experi...
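The key difficulty in this asynchronous setting is that gradients arrive computed at stale iterates. A toy single-machine simulation of that effect, assuming an ℓ1-regularized least-squares objective for concreteness (the paper's setting is more general, and the function name and delay model are our own illustration):

```python
import numpy as np
from collections import deque

def delayed_prox_grad(A, b, lam, step, delay=3, n_iters=300):
    """Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1 where
    each gradient is evaluated at an iterate `delay` steps old, mimicking
    an asynchronous worker (toy model, not the paper's algorithm)."""
    n = A.shape[1]
    x = np.zeros(n)
    # Ring buffer of past iterates; history[0] is the oldest (stale) one.
    history = deque([x.copy()] * (delay + 1), maxlen=delay + 1)
    for _ in range(n_iters):
        x_stale = history[0]
        g = A.T @ (A @ x_stale - b)          # gradient at the stale iterate
        z = x - step * g
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # l1 prox
        history.append(x.copy())
    return x
```

With a bounded delay and a correspondingly smaller step size the iteration still decreases the objective, which is the kind of guarantee such analyses establish.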


Momentum and Stochastic Momentum for Stochastic Gradient, Newton, Proximal Point and Subspace Descent Methods

In this paper we study several classes of stochastic optimization algorithms enriched with heavy ball momentum. Among the methods studied are: stochastic gradient descent, stochastic Newton, stochastic proximal point and stochastic dual subspace ascent. This is the first time momentum variants of several of these methods are studied. We choose to perform our analysis in a setting in which all o...
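The heavy-ball template common to these variants adds a multiple of the previous displacement to each stochastic step. A minimal sketch (function name and test problem are our own; the paper analyzes specific gradient estimators, not this generic oracle):

```python
import numpy as np

def sgd_heavy_ball(grad_fn, x0, step=0.05, beta=0.9, n_iters=300, seed=0):
    """Stochastic gradient descent with heavy-ball momentum:
        x_{k+1} = x_k - step * g_k + beta * (x_k - x_{k-1}),
    where g_k = grad_fn(x_k, rng) is a stochastic gradient estimate."""
    rng = np.random.default_rng(seed)
    x_prev = x0.copy()
    x = x0.copy()
    for _ in range(n_iters):
        g = grad_fn(x, rng)                        # stochastic gradient
        x_next = x - step * g + beta * (x - x_prev)
        x_prev, x = x, x_next
    return x
```

Setting beta = 0 recovers plain SGD; the same momentum correction can be bolted onto proximal point or subspace-descent updates, which is the unifying view the abstract describes.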


Scalable Nuclear-norm Minimization by Subspace Pursuit Proximal Riemannian Gradient

Trace-norm regularization plays a vital role in many learning tasks, such as low-rank matrix recovery (MR), and low-rank representation (LRR). Solving this problem directly can be computationally expensive due to the unknown rank of variables or large-rank singular value decompositions (SVDs). To address this, we propose a proximal Riemannian gradient (PRG) scheme which can efficiently solve tr...
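The SVD cost the abstract refers to comes from the standard proximal operator of the trace (nuclear) norm, singular value thresholding, which requires a full decomposition at every iteration. A sketch of that baseline operator (the PRG scheme itself is what replaces this, and its details are not in the abstract):

```python
import numpy as np

def svt(M, t):
    """Singular value thresholding: the proximal operator of t * ||.||_*
    (nuclear / trace norm).  Each singular value is shrunk by t; the full
    SVD computed here is the per-iteration cost that scalable schemes
    try to avoid."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt
```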


Krylov Subspace Methods in Dynamical Sampling

Let B be an unknown linear evolution process on C^d ≃ ℓ²(Z_d) driving an unknown initial state x and producing the states {B^ℓ x : ℓ = 0, 1, …} at different time levels. The problem under consideration in this paper is to find as much information as possible about B and x from the measurements Y = {x(i), Bx(i), …, B^{l_i} x(i) : i ∈ Ω ⊂ Z_d}. If B is a "low-pass" convolution operator, we show that we ...
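The measurements at a single node are samples of the vectors x, Bx, B²x, …, whose span is a Krylov subspace; what those samples can reveal about B and x is governed by that subspace. A small sketch of assembling it (function name and test operator are our own illustration):

```python
import numpy as np

def krylov_matrix(B, x, m):
    """Stack the time-level states x, Bx, B^2 x, ..., B^{m-1} x as
    columns; in dynamical sampling, the spatiotemporal samples live on
    the columns of this Krylov matrix."""
    cols = [x]
    for _ in range(m - 1):
        cols.append(B @ cols[-1])
    return np.column_stack(cols)
```

When B has distinct eigenvalues and x excites every mode, the Krylov matrix has full rank, which is the favorable case for recovering information about both B and x.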


Sampling Methods for Random Subspace Domain Adaptation

Supervised classification tasks like Sentiment Analysis or text classification need labelled training data. These labels can be difficult to obtain, especially for complicated and ambiguous data like texts. Instead of labelling new data, domain adaptation tries to reuse already labelled data from related tasks as training data. We propose a greedy selection strategy to identify a small subset o...



Journal

Journal title: Mathematics of Operations Research

Year: 2021

ISSN: 0364-765X, 1526-5471

DOI: https://doi.org/10.1287/moor.2020.1092